This repository contains code for adversarially robust training and evaluation of both quantum machine learning (QML) models and classical deep learning models on various image classification tasks.

1. Project Structure

The repository is organized into four main Jupyter notebooks:

Train_baselines.ipynb: Used for training the classical deep learning models (ResNet9, ViT, and MLP).

Test_baselines.ipynb: Used for evaluating the performance of the trained classical models under adversarial attacks.
 
Train_QML.ipynb: Used for training the QML model.

Test_QML.ipynb: Used for evaluating the performance of the trained QML model under adversarial attacks.

2. Hyperparameters

- L_cons: the construction attack bound, which represents the theoretical robustness guarantee in terms of the l_2 norm.
- epsilon: the privacy loss in the given differential privacy (DP) budget.
- total_delta: the failure probability in the given DP budget.
- nums_loops: the number of training reruns used to report averaged performance.
- q_noise: the amount of quantum noise added (applicable only to the QML model).
- classical_mech: the classical DP mechanism used to add noise.
- dataset_name: the name of the dataset used for training or evaluation.
- attack_name: the name of the adversarial attack applied during evaluation.
- L_{attk}: the attack strength, measured in the l_inf norm.
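The hyperparameters above could be collected in a single configuration dictionary before running a notebook. The sketch below is purely illustrative: the values are placeholders, not the settings used in the paper, and the notebooks may organize these options differently.

```python
# Illustrative configuration; values are placeholders, not the paper's settings.
config = {
    "L_cons": 0.5,                  # construction attack bound (l_2 norm)
    "epsilon": 1.0,                 # DP privacy loss
    "total_delta": 1e-5,            # DP failure probability
    "nums_loops": 5,                # training reruns to average over
    "q_noise": 0.01,                # quantum noise level (QML only)
    "classical_mech": "gaussian",   # classical DP noise mechanism
    "dataset_name": "MNIST",        # dataset for training/evaluation
    "attack_name": "PGD",           # adversarial attack used at test time
    "L_attk": 0.1,                  # attack strength (l_inf norm)
}
```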

3. How to Run

- Classical Models: Open and run the Train_baselines.ipynb notebook to train the ResNet9, ViT, and MLP models. After training, use the Test_baselines.ipynb notebook to evaluate the saved classical models under adversarial attacks. Vary the hyperparameters listed above to reproduce the experimental settings presented in the paper.

- Quantum Model: Open and run the Train_QML.ipynb notebook to train the QML model. Then use the Test_QML.ipynb notebook to evaluate the trained QML model under adversarial attacks. As with the classical models, vary the hyperparameters to reproduce the experimental settings presented in the paper.
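If you prefer not to run the notebooks interactively, Jupyter's nbconvert tool can execute them from the command line. This is an optional alternative workflow, not part of the original instructions; it assumes Jupyter is installed in your environment.

```shell
# Execute a training notebook headlessly, writing results back in place.
jupyter nbconvert --to notebook --execute --inplace Train_baselines.ipynb

# Then execute the corresponding testing notebook.
jupyter nbconvert --to notebook --execute --inplace Test_baselines.ipynb
```

The same pattern applies to Train_QML.ipynb and Test_QML.ipynb.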

4. How It Works

The training notebooks save the trained model weights to the models/ directory. The testing notebooks then load these saved models, apply the selected adversarial attacks, and report accuracy metrics that measure each model's robustness under attack.
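The save-then-load cycle described above follows the standard PyTorch pattern. The sketch below is an assumption-laden illustration: the stand-in linear model, filename, and FGSM-style attack step are hypothetical, since the notebooks define their own ResNet9/ViT/MLP architectures and attack implementations.

```python
import os
import torch
import torch.nn as nn

os.makedirs("models", exist_ok=True)

# Stand-in model; the notebooks define ResNet9/ViT/MLP themselves.
model = nn.Linear(784, 10)

# Training notebooks save the trained weights under models/.
torch.save(model.state_dict(), "models/example.pt")

# Testing notebooks load the saved weights before evaluation.
restored = nn.Linear(784, 10)
restored.load_state_dict(torch.load("models/example.pt"))
restored.eval()

# Minimal FGSM-style perturbation, illustrating the attack step:
# one l_inf-bounded gradient-sign step of size 0.1 (a placeholder L_attk).
x = torch.randn(1, 784, requires_grad=True)
loss = nn.functional.cross_entropy(restored(x), torch.tensor([0]))
loss.backward()
x_adv = x + 0.1 * x.grad.sign()
```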